Double Responses in Drift Diffusion Models

Stien Mommen, David Matischek, Jeroen Timmerman, Lexi Li, & Emma Akrong

March 29th, 2025

Evans et al. (2020)

Racing Diffusion Model (RDM)

  • Process of choosing between N alternatives is modeled as N racing evidence accumulators

  • Assumptions:

    • One threshold level of evidence
    • Random uniform distribution of starting evidence
    • Drift rate for each alternative
    • Non-decision time

What is a double response (DR)?

Experimental Paradigm (Dutilh et al. (2009))

  • Lexical decision task: word vs. non-word

  • Participants: 4

  • Trials: 10,000

  • Conditions: Speed vs. Accuracy (between-subjects design)

  • DR implementation:

    • 250 ms to give second response
    • Participants were not instructed to give DRs (implicit)

Why include DR in model?

  • Additional information beyond response choice and RT

  • Better understanding of the decision-making process as a whole

BayesFlow

Aim

  • Determine whether including DRs lets the model learn more from the data

    • RQ: Does including DRs improve posterior contraction?
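Posterior contraction compares the posterior's variance to the prior's: values near 1 mean the data were highly informative, values near 0 mean the posterior barely moved from the prior. A minimal sketch of this standard metric (the sample draws below are made up for illustration):

```python
import numpy as np

def posterior_contraction(prior_samples, posterior_samples):
    """Contraction = 1 - Var(posterior) / Var(prior)."""
    return 1.0 - np.var(posterior_samples) / np.var(prior_samples)

# Toy example for a single parameter, with made-up draws
rng = np.random.default_rng(42)
prior = rng.normal(0.0, 1.0, size=10_000)       # wide prior
posterior = rng.normal(0.5, 0.2, size=10_000)   # narrower posterior
print(posterior_contraction(prior, posterior))  # close to 0.96 for these draws
```

If including DRs improves contraction, the posterior draws for the same parameter should be tighter than without them.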

Why BayesFlow?

  • MCMC: sampling must be rerun for every new dataset, which is computationally costly

  • BayesFlow: training the neural network is costly up front, but subsequent (amortized) inference is fast

Observation Models

RDM (base model for all)

dx_i = v_i·dt + σ_i·ε·√dt

dx_i = change in evidence

v_i = drift rate of choice i

dt = difference in time

σ_i = scale of within-trial noise for choice i

ε = random variable (standard normal)
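The accumulation equation above can be simulated with an Euler–Maruyama step. A minimal sketch (step size and parameter values are illustrative, not the ones used in the study):

```python
import numpy as np

def rdm_step(evidence, drift, sigma, dt, rng):
    """One Euler-Maruyama update of dx_i = v_i*dt + sigma_i*eps*sqrt(dt)."""
    eps = rng.normal(0.0, 1.0, size=evidence.shape)
    return evidence + drift * dt + sigma * eps * np.sqrt(dt)

rng = np.random.default_rng(0)
evidence = np.zeros(2)               # two racing accumulators
drift = np.array([1.5, 0.5])         # illustrative drift rates
sigma = np.array([1.0, 1.0])         # within-trial noise scales
dt = 0.001
for _ in range(1000):                # one simulated second
    evidence = rdm_step(evidence, drift, sigma, dt, rng)
print(evidence)                      # noisy, centred near drift * 1 s
```

Over many trials the mean accumulated evidence approaches drift × elapsed time, while the √dt scaling keeps the noise variance proportional to elapsed time regardless of step size.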

Observation Models

Feed-forward inhibition (FFI)

  • Future evidence accumulation is reduced based on the accumulation rates of the competing alternatives

dx_i = (v_i − β·∑_{j≠i} v_j)·dt + (σ_i·ε − β·∑_{j≠i} σ_j·ε)·√dt

β = amount of inhibition

dx_i = change in evidence

v_i = drift rate of choice i

dt = difference in time

σ_i = scale of within-trial noise for choice i, ε = random variable
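The FFI update can be sketched the same way; with β = 0 it reduces to the plain RDM step. Parameter values below are illustrative:

```python
import numpy as np

def ffi_step(evidence, drift, sigma, beta, dt, rng):
    """FFI update: each accumulator's drift and noise are reduced by beta
    times the summed drift/noise of the competing accumulators."""
    eps = rng.normal(0.0, 1.0, size=len(evidence))
    noise = sigma * eps
    inhib_v = drift.sum() - drift        # sum over j != i
    inhib_noise = noise.sum() - noise    # sum over j != i
    return (evidence
            + (drift - beta * inhib_v) * dt
            + (noise - beta * inhib_noise) * np.sqrt(dt))

rng = np.random.default_rng(0)
drift = np.array([1.0, 1.0])
# With zero noise the update is purely drift minus inhibition:
x = ffi_step(np.zeros(2), drift, np.zeros(2), 0.5, 0.01, rng)
print(x)  # → [0.005 0.005], i.e. (1.0 - 0.5*1.0) * 0.01 each
```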

Observation Models

Leaky-competing accumulator (LCA)

  • Lateral inhibition with leakage + evidence cannot be negative
  • Leakage: rate at which the alternative’s accumulated evidence is reduced
dx_i = (v_i − λ·x_i − β·∑_{j≠i}^N x_j)·dt + σ_i·ε·√dt,  where x_i ≥ 0

λ = leakage rate

β = amount of inhibition

dx_i = change in evidence, v_i = drift rate of choice i, dt = difference in time, σ_i = scale of within-trial noise for choice i, ε = random variable
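The LCA update adds leakage, lateral inhibition on the accumulated evidence, and truncation at zero. A sketch, with illustrative parameter values:

```python
import numpy as np

def lca_step(evidence, drift, sigma, leak, beta, dt, rng):
    """LCA update: leakage -lambda*x_i, lateral inhibition
    -beta * sum_{j!=i} x_j, noise, and truncation at zero."""
    eps = rng.normal(0.0, 1.0, size=len(evidence))
    inhib = evidence.sum() - evidence    # sum over j != i
    new = (evidence
           + (drift - leak * evidence - beta * inhib) * dt
           + sigma * eps * np.sqrt(dt))
    return np.maximum(new, 0.0)          # evidence cannot be negative

rng = np.random.default_rng(0)
# Zero drift and zero noise: only leakage and inhibition act
x = lca_step(np.array([1.0, 1.0]), np.zeros(2), np.zeros(2),
             leak=0.1, beta=0.2, dt=0.1, rng=rng)
print(x)  # → [0.97 0.97]: each unit loses 0.1*1*0.1 leakage + 0.2*1*0.1 inhibition
```

Unlike FFI, the inhibition here depends on the competitors' accumulated evidence x_j, not their drift rates, so the competition strengthens as a trial unfolds.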

Priors


Implementation in Python (RDM)

import numba as nb
import numpy as np

@nb.jit(nopython=True, cache=True)
def trial(drift, starting_point, boundary, ndt, max_t, max_drt=0.25, s=1.0, dt=None):
    drift = np.asarray(drift, dtype=np.float64)
    response = -1

    if dt is None:
        dt = max_t / 10_000.0  # default step size: 10,000 steps over max_t

    t = 0.0
    start = float(starting_point)
    # Random uniform starting evidence for each accumulator
    evidence = np.random.uniform(0, start, size=len(drift))

    boundary += start  # shift the threshold above the starting-point range
    dr = False         # whether a double response occurred

    # Initial accumulation: race until one accumulator crosses or time runs out
    while np.all(evidence < boundary) and t < max_t:
        for resp in range(len(drift)):
            evidence[resp] += dt * drift[resp] + np.random.normal(0.0, s) * np.sqrt(dt)
        t += dt

    rt = t + ndt  # response time = decision time + non-decision time
    drt = 0.0

    # Winning accumulator (first index to cross); stays -1 if the trial timed out
    response_arr = np.where(evidence > boundary)[0]
    if response_arr.size > 0:
        response = response_arr[0]

    # Double-response window: losing accumulators keep racing for up to max_drt
    while drt < max_drt and not dr:
        for resp in range(len(drift)):
            if response != -1 and resp != response:
                evidence[resp] += dt * drift[resp] + np.random.normal(0.0, s) * np.sqrt(dt)
                if evidence[resp] >= boundary:
                    dr = True  # a second accumulator crossed: double response
                    break
        drt += dt

    return rt, response, dr, drt

Diagnostics (RDM)


Results (RDM)


Results (FFI)


Diagnostics (LCA)


Results (LCA)


Key Findings

Adding double responses led to a modest improvement in how much the model learned from the data

RDM

FFI

LCA

Key Findings

It also modestly improved posterior contraction

RDM

FFI

LCA

Limitations of our model

  • Priors could be more strongly grounded in theory

  • More participants, and hence more double responses, would strengthen the analysis

Accuracy condition, Participant 2 (example double-response trials)

Trial   Initial Response   First RT (s)   Double Response   Double Response RT (s)
6661    FALSE              0.4852         TRUE              0.0458
7372    FALSE              0.4194         TRUE              0.0540
9323    FALSE              0.4851         TRUE              0.0700

Directions for Future Research

  • Study design with explicit double responses (instruct participants to give them)

  • Alternative definition of a double response: the losing accumulator's drift overtakes the winner's

Thanks for listening <3